In: Political Analysis: the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 23, Issue 2, pp. 159-179
To what extent are Internet users resilient to online censorship? When does censorship influence consumption of information and when does it create backlash? Drawing on a growing literature on Internet users' reactions to censorship, I posit that awareness of censorship, incentives to seek out information, and resources to circumvent censorship are essential to resilience to censorship. I describe how authoritarian regimes have adapted their strategies of censorship to reduce both awareness of censorship and demand for uncensored information.
In: Political Analysis: the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 24, Issue V10, pp. 1-5
Social media companies have come under increasing pressure to remove misinformation from their platforms, but partisan disagreements over what should be removed have stymied efforts to deal with misinformation in the United States. Current explanations for these disagreements center on the "fact gap"—differences in perceptions about what is misinformation. We argue that partisan differences could also be due to "party promotion"—a desire to leave misinformation online that promotes one's own party—or a "preference gap"—differences in internalized preferences about whether misinformation should be removed. Through an experiment where respondents are shown false headlines aligned with their own or the opposing party, we find some evidence of party promotion among Democrats and strong evidence of a preference gap between Democrats and Republicans. Even when Republicans agree that content is false, they are half as likely as Democrats to say that the content should be removed and more than twice as likely to consider removal as censorship.
In: Political Analysis: the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 23, Issue 2, pp. 159-179
"Robust standard errors" are used in a vast array of scholarship to correct standard errors for model misspecification. However, when misspecification is bad enough to make classical and robust standard errors diverge, assuming that it is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, settling for a misspecified model, with or without robust standard errors, will still bias estimators of all but a few quantities of interest. The resulting cavernous gap between theory and practice suggests that considerable gains in applied statistics may be possible. We seek to help researchers realize these gains via a more productive way to understand and use robust standard errors; a new general and easier-to-use "generalized information matrix test" statistic that can formally assess misspecification (based on differences between robust and classical variance estimates); and practical illustrations via simulations and real examples from published research. How robust standard errors are used needs to change, but instead of jettisoning this popular tool we show how to use it to provide effective clues about model misspecification, likely biases, and a guide to considerably more reliable, and defensible, inferences. Accompanying this article is software that implements the methods we describe.
Crisis motivates people to track news closely, and this increased engagement can expose individuals to politically sensitive information unrelated to the initial crisis. We use the case of the COVID-19 outbreak in China to examine how crisis affects information seeking in countries that normally exert significant control over access to media. The crisis spurred censorship circumvention and access to international news and political content on websites blocked in China. Once individuals circumvented censorship, they not only received more information about the crisis itself but also accessed unrelated information that the regime has long censored. Using comparisons to democratic and other authoritarian countries also affected by early outbreaks, the findings suggest that people blocked from accessing information most of the time might disproportionately and collectively access that long-hidden information during a crisis. Evaluations resulting from this access, negative or positive for a government, might draw on both current events and censored history.
In: Political Analysis: the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 31, Issue 4, pp. 575-590
Scholars, pundits, and politicians use opinion surveys to study citizen beliefs about political facts, such as the current unemployment rate, and more conspiratorial beliefs, such as whether Barack Obama was born abroad. Many studies, however, ignore acquiescence-response bias, the tendency for survey respondents to endorse any assertion made in a survey question regardless of content. With new surveys fielding questions asked in recent scholarship, we show that acquiescence bias inflates the estimated incidence of conspiratorial beliefs and political misperceptions in the United States and China by up to 50%. Acquiescence bias is disproportionately prevalent among more ideological respondents, inflating correlations between political ideology, such as conservatism, and endorsement of conspiracies or misperception of facts. We propose and demonstrate two methods to correct for acquiescence bias.
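One familiar correction in this spirit randomizes respondents between an assertion and its logical reversal and combines the two agreement rates. The sketch below is a hypothetical illustration of that idea with made-up proportions; it is not necessarily either of the authors' two methods:

    # Illustrative split-sample correction, assuming half the respondents are
    # randomly shown an assertion and half its logical reversal. The agreement
    # rates below are hypothetical, not figures from the article.
    p_agree_original = 0.40  # e.g., endorse "Barack Obama was born abroad"
    p_agree_reversed = 0.75  # e.g., endorse "Barack Obama was born in the U.S."

    # If acquiescence adds the same increment b to agreement with either wording,
    # then p_original = p_true + b and p_reversed = (1 - p_true) + b; subtracting
    # and solving cancels b and gives a bias-free estimate of the belief:
    p_true = (p_agree_original + (1.0 - p_agree_reversed)) / 2.0
    bias = p_agree_original - p_true
    print(f"corrected incidence: {p_true:.3f}; implied acquiescence bias: {bias:.3f}")

Here the naive estimate of 40% endorsement shrinks to 32.5% once the acquiescence increment shared by both wordings is differenced out, consistent with the up-to-50% inflation the article reports.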
Conventional wisdom assumes that increased censorship will strictly decrease access to information. We delineate circumstances when increases in censorship expand access to information for a substantial subset of the population. When governments suddenly impose censorship on previously uncensored information, citizens accustomed to acquiring this information will be incentivized to learn methods of censorship evasion. These evasion tools provide continued access to the newly blocked information—and also extend users' ability to access information that has long been censored. We illustrate this phenomenon using millions of individual-level actions of social media users in China before and after the block of Instagram. We show that the block inspired millions of Chinese users to acquire virtual private networks, and that these users subsequently joined censored websites like Twitter and Facebook. Despite initially being apolitical, these new users began browsing blocked political pages on Wikipedia, following Chinese political activists on Twitter, and discussing highly politicized topics such as opposition protests in Hong Kong.
In: Political Analysis: the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 23, Issue 2, pp. 254-277
The Chinese government has long been suspected of hiring as many as 2 million people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called 50c party posts vociferously argue for the government's side in political and policy debates. As we show, this is also true of most posts openly accused on social media of being 50c. Yet almost no systematic empirical evidence exists for this claim or, more importantly, for the Chinese regime's strategic objective in pursuing this activity. In the first large-scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime's strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program and suggest how they may change our broader theoretical understanding of "common knowledge" and information control in authoritarian regimes.